Modeling and analysis for the image mapping spectrometer
Yuan Yan, Ding Xiao-Ming, Su Li-Juan, Wang Wan-Yue
Key Laboratory of Precision Opto-Mechatronics Technology, Ministry of Education, Beihang University, Beijing 100191, China

 

† Corresponding author. E-mail: yuanyan@buaa.edu.cn

Abstract

The snapshot image mapping spectrometer (IMS) has advantages such as high temporal resolution, high throughput, a compact structure, and a simple reconstruction algorithm. In recent years, it has been utilized in biomedicine, remote sensing, etc. However, system errors and various other factors can cause cross talk, image degradation, and spectral distortion in the system. In this research, a theoretical model is presented along with the point response function (PRF) for the IMS, and the influences of the mirror tilt angle error of the image mapper and the prism apex angle error are analyzed based on the model. The results indicate that the tilt angle error causes a loss of light throughput, while the prism apex angle error causes spectral mixing between adjacent sub-images. The light intensity on the image plane is reduced to 95% when the mirror tilt angle error is increased to ( ). The prism apex angle error should be controlled within the range of 0–36″ (0.01°) to ensure the designed number of spectral bands and avoid spectral mixing between adjacent images.

1. Introduction

Imaging spectrometers can collect two-dimensional (2D) spatial information and one-dimensional (1D) spectral information (together called a three-dimensional (3D) datacube) of a scene. Imaging spectrometry is a rapidly growing technology which has been utilized in many fields in recent decades, including remote sensing,[1] security,[2] biomedicine,[3,4] and environment monitoring.[5] The image mapping spectrometer (IMS) is a kind of snapshot imaging spectrometer which has no scanning part and can obtain a 3D datacube of the objects in a single integration time.[6] The IMS was presented by Rice University in 2009.[7] It was developed from the integral field spectrometers (IFSs) which have been used in astronomy for decades.[8–16] Compared with the integral field unit (IFU) used in other types of IFSs, the image mapper used in the IMS consists of multiple narrow mirror facets and is carefully designed so that the IMS has a much more compact structure and higher spatial resolution than other IFSs.[17]

Gao et al. introduced the operation principle of the IMS and developed the first prototype, which can acquire a datacube of 100×100×25.[7] The systems were then improved to acquire datacubes of 285×285×60 and 350×350×46, which were utilized in hyperspectral microscopy,[18] spectral imaging endoscopy,[19] and remote sensing.[20] These instruments can be used for cell fluorophore discrimination,[21,22] clinical tissue diagnostics,[23,24] and gas detection.[25] Moreover, the IMS can also be combined with structured illumination and optical coherence tomography for more applications in biomedicine.[26–28] At the same time, some system errors and various factors influence the system performance, such as diffraction, scattering from the reflecting surfaces and edges, surface form errors, surface roughness and width variations, and "edge eating".[29] In order to improve the performance of the IMS, the errors of the image mapper have been explored and corrected.[30,31] Kester et al. presented a model of the image mapper and analyzed the surface form errors of the strip mirrors,[31] studying the light intensity distribution on the pupil array plane. Their results suggested that the surface form error contributes significantly to the cross talk, which distorts the spectral information and degrades the spectral images. Based on the model, they optimized the design of the image mapper to reduce the system cross talk. However, they only modeled the image mapper for its optimal design, without considering the dispersion process or other factors of the entire spectral imaging system, and the point response function (PRF) of the IMS was not deduced. As a result, the influence of the system issues mentioned above on the quality of the dataset cannot be analyzed by this model.

In this paper, we present a theoretical model of the IMS including the light propagation from object plane to image plane with the dispersion process based on scalar diffraction theory. The rest of the present paper is organized as follows. The system structure and operation principles of the IMS are introduced in Section 2. The PRF for the light propagation through the entire system is derived in Section 3. In Section 4, simulation experiments are conducted to generate synthetic spectral imaging data acquired by an IMS. In Section 5, the influences of the mirror tilt angle error of the image mapper and the prism apex angle error are analyzed based on the presented model. Finally, some conclusions are drawn from the present study and the future work is suggested in Section 6.

2. Principles of the snapshot image mapping spectrometer

The snapshot IMS consists of a fore optical system and an imaging spectrometer, as illustrated in Fig. 1. The fore optical system (termed the fore optics) is comprised of an aperture and lens L1, which images an object onto the image mapper. The image mapper is an array of long strip mirrors arranged with 2D tilt angles, as shown in Fig. 2. The fore optics is telecentric in the image space, so the principal rays are parallel to the optical axis. In the imaging spectrometer, the reflected light from each mirror is collimated by the lens L2 and dispersed by a corresponding prism. Finally, the dispersed light is imaged onto the sensor by the lens L3 placed behind the prism array. In the IMS, the image mapper divides the primary image and reflects the incident light toward different directions in accordance with the normal vectors of the mirrors, and the dispersive prisms and imaging lenses L3 are arranged in corresponding arrays.[17]

Fig. 1. (color online) Schematic diagram of the IMS.
Fig. 2. (color online) Schematic layout of the image mapper. A coordinate system is established to illustrate the image mapper structure: the substrate of the image mapper lies in the coordinate plane, with one axis parallel to the length of the strip mirrors. Panel (a) shows a 3D model, which has only eight strip mirrors for clarity (in a practical system, the image mapper usually contains hundreds of strip mirrors). The mirrors are grouped into two blocks, and each block contains four mirrors with different 2D tilt angles (αm, βn), where m = 1, 2 and n = 1, 2. The mirror tilt variations are periodically repeated in different blocks. Panel (b) illustrates the definition of (αm, βn). A vector is introduced as the normal of a strip mirror: αm is the tilt angle between this normal and one coordinate plane, and βn is the tilt angle between the normal and the other coordinate plane.

In order to analyze the light wave transmission from object to image in the IMS, coordinates are defined: ( ) for the object plane, (ξ, η) for the plane of the aperture stop, ( ) for the image mapper, and ( ) for the sensor. As indicated in Fig. 1, the object distance to the aperture stop and the image distance of the fore optics are marked, and f1, f2, and f3 are the focal lengths of L1, L2, and L3, respectively.

As a shift-variant incoherent imaging system, the monochromatic image on the sensor of the IMS can be expressed as

I(x_i, y_i; λ) = ∬ O(x_o, y_o; λ) PRF(x_i, y_i; x_o, y_o; λ) dx_o dy_o, (1)

where O(x_o, y_o; λ) is the spectral radiation of the object at object-plane coordinates (x_o, y_o), and PRF(x_i, y_i; x_o, y_o; λ) is the spatial imaging point response function (PRF) of the IMS at wavelength λ for image-plane coordinates (x_i, y_i). In this paper, the PRF is derived by using scalar diffraction theory.[32]
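To make the shift-variant nature of Eq. (1) concrete, the following sketch discretizes the imaging integral in one dimension: each object sample is spread by its own point response. The Gaussian PRF whose width grows with object position is an assumed toy model for illustration, not the system PRF derived in this paper.

```python
import numpy as np

def shift_variant_image(obj, prf):
    """Discretized form of Eq. (1): the image is a sum of per-point
    responses, not a single convolution, because the PRF depends on
    the object position."""
    x = np.arange(obj.size)
    img = np.zeros(obj.size)
    for xo in range(obj.size):
        img += obj[xo] * prf(x, xo)
    return img

def toy_prf(x, xo):
    # Illustrative shift-variant PRF: a normalized Gaussian whose
    # width grows with the object coordinate xo (assumed, not derived).
    sigma = 1.0 + 0.05 * xo
    g = np.exp(-0.5 * ((x - xo) / sigma) ** 2)
    return g / g.sum()

obj = np.zeros(64)
obj[10], obj[50] = 1.0, 1.0              # two point sources
img = shift_variant_image(obj, toy_prf)  # the source at 50 is spread wider
```

Because each normalized response depends on the source position, the two point sources produce differently shaped spots, which a single convolution kernel could not reproduce.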

3. Imaging model

In this section, the imaging model of the IMS is derived through the following procedure. First, the object is imaged onto the image mapper through the fore optics. Then, the strip mirrors slice the image plane and reflect it into the imaging spectrometer. The sliced strips are dispersed and imaged through the prism array and lens array. Since the position variation of the image mapper caused by the tilt is kept within the depth-of-field range of the fore optics, the influence of this tilt is not included in the model.

3.1. Imaging model of the fore optics

Considering the fore optics in Fig. 1, the object denoted by the coordinates ( , ) is imaged by a telecentric imaging lens. The aperture is generalized by the pupil function, and then the impulse response function (IRF) of the fore optics is given as

where k = 2π/λ, with λ being the imaging wavelength, and ( ) is defined as the principal plane of L1, which is considered as a single thin lens for simplicity. The details of the derivations and the simplifications necessary to obtain Eq. (2) are given in Appendix A.

Therefore, the PRF of the fore optics is

3.2. Slicing and reflecting model of the image mapper

The image mapper is comprised of long strip mirrors as shown in Fig. 2(a). The coordinate system is established in Fig. 2(a) with one axis along the strip length of the mirrors. The normal vector direction of each mirror is defined by (αm, βn) as illustrated in Fig. 2(b), and the mirrors are arranged in order to form a block, also shown in Fig. 2(a). The mirror arrangement is replicated in the other blocks.

The reflection function of the image mapper is defined as[31]

where ( ) represents the coordinates on the image mapper, l and b are the length and the width of the image mapper, respectively, c is the block width, and w is the width of a strip mirror. The reflection coefficient of the image mapper is assumed to be uniform for all mirrors, and Nc is the block number.
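The periodic assignment of tilt angles to the strip mirrors can be sketched as a small table-building routine; the angle values below are arbitrary placeholders, not design values.

```python
def mapper_tilt_table(n_mirrors, alphas, betas):
    """Assign periodic 2D tilt angles (alpha_m, beta_n) to the strip
    mirrors: one block holds every (m, n) combination, and the pattern
    repeats block by block, as in Fig. 2(a)."""
    block = [(a, b) for b in betas for a in alphas]  # one block of mirrors
    return [block[i % len(block)] for i in range(n_mirrors)]

# Eight mirrors, two blocks of four, with placeholder angles in radians.
tilts = mapper_tilt_table(8, alphas=[0.01, -0.01], betas=[0.02, -0.02])
```

Each block contains every (αm, βn) combination once, so mirrors four positions apart share the same tilt, which is exactly the periodicity shown in Fig. 2(a).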

Therefore, the IRF after the image mapper becomes

3.3. Virtual aperture array

The aperture stop situated on the object space focal plane of L1 forms a virtual aperture array on the image space focal plane of L2. The basic configuration between the aperture stop and the virtual aperture array can be treated as the system[31] shown in Fig. 3.

Fig. 3. (color online) Generation of the virtual aperture array.

The number of virtual apertures coincides with the number of mirror tilt directions of the image mapper. The plane of the virtual aperture array is denoted as ( ) in Fig. 3, and ( ) are the coordinates of the aperture centers, which are given by

where f2 is the focal length of L2. The distance between the centers of two adjacent apertures is given by

Expanding Eq. (7):

we can define

Since αm and βn are extremely small angles, equation (9) can be simplified into

Being the image of the aperture stop through L1 and L2, the virtual aperture has a diameter given by

where is the diameter of the aperture stop in the fore optics. In order to avoid overlapping between adjacent virtual apertures, we need to ensure that
Substituting Eqs. (10) and (11) into Eq. (12), we conclude that
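The geometry of Eqs. (10)–(12) can be checked numerically under the small-angle approximation. All numbers below are illustrative assumptions, not the design parameters of Table 1.

```python
import numpy as np

# Illustrative parameters (not the design values of Table 1).
f1, f2 = 0.1, 0.1       # focal lengths of L1 and L2 (m)
D_stop = 0.8e-3         # diameter of the aperture stop (m)
alphas = np.array([-0.015, -0.005, 0.005, 0.015])  # mirror tilts (rad)

# Small-angle form of Eq. (10): a mirror tilted by alpha deviates the
# reflected chief ray by 2*alpha, so the virtual aperture center sits
# near 2*f2*alpha on the image-space focal plane of L2.
centers = 2.0 * f2 * alphas

# Eq. (11): the stop imaged through L1 and L2 scales by f2/f1.
D_virtual = (f2 / f1) * D_stop

# Eq. (12): adjacent centers must be farther apart than one diameter.
spacing = np.diff(centers)
no_overlap = bool(np.all(spacing > D_virtual))
```

With equal tilt increments the aperture centers are equally spaced, so the non-overlap check reduces to comparing one spacing against the virtual aperture diameter.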

3.4. Prism dispersing and spectral imaging

The dispersion process of the imaging spectrometer is based on prism dispersion. A prism array is placed on the plane of the virtual aperture array, as shown in Fig. 1, to disperse the light reflected by the image mapper. Meanwhile, a lens array is placed behind the prism array to converge the dispersed light and form spectral images on the sensor. In order to simplify the model, a singlet prism array is implemented. The prisms are identical and hypothetically thin, and thus the light propagation inside the prism is not considered.[33] The lenses L3 in the lens array are treated under the same premise. In practice, the singlet prisms can be replaced by an Amici prism array for better imaging and dispersing performance.

The strip images reflected by mirrors oriented at the same angle are imaged again on the sensor through a certain virtual aperture. In the spectral imaging system, the mirrors can be taken as slits situated on the object space focal plane of L2, as depicted in the schematic diagram in Fig. 4. The plane coordinates of the sensor are denoted with one axis along the spectral direction. The spectral images of the slits spread parallel to this axis to form a subimage under the virtual aperture. If λ0 is the central wavelength of the spectral range and the deviation angle of the optical axis after the prism is known, the dispersion near the intersection of the optical axis with the sensor is given by

where the left-hand side is the dispersion position of wavelength λ relative to λ0, f3 is the focal length of L3, and the prism angular dispersion is given by
where A is the apex angle and n is the refractive index of the prism.
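The dispersion model can be sketched numerically. The Sellmeier coefficients below are the Schott catalog values for SF6 (an assumption consistent with the material choice in Section 4), the thin-prism deviation δ = (n − 1)A is used as a stand-in for the exact dispersion relation, and the focal length f3 is an arbitrary illustrative value rather than the paper's design parameter.

```python
import numpy as np

# Sellmeier coefficients for SF6 (Schott catalog values; lambda in um).
B = (1.72448482, 0.390104889, 1.04572858)
C = (0.0134871947, 0.0569318095, 118.557185)

def n_sf6(lam_um):
    """Refractive index of SF6 from the Sellmeier formula."""
    l2 = lam_um ** 2
    return np.sqrt(1.0 + sum(b * l2 / (l2 - c) for b, c in zip(B, C)))

A = np.deg2rad(29.9)   # prism apex angle (Section 5.2)
f3 = 0.05              # focal length of L3 (m); assumed, not from the paper
lam0 = 0.6             # central wavelength (um)

def dispersion_position(lam_um):
    """Thin-prism stand-in for Eq. (15): deviation delta = (n - 1) * A,
    so the line at lam lands f3 * A * (n(lam) - n(lam0)) away from the
    central-wavelength position."""
    return f3 * A * (n_sf6(lam_um) - n_sf6(lam0))

# Spectral spread over 450-750 nm (cf. Eq. (16)); n falls with
# wavelength, so the blue end is deviated more than the red end.
spread = dispersion_position(0.45) - dispersion_position(0.75)
```

The nonlinearity of n(λ) is what compresses the long-wavelength bands on the sensor, the effect noted later for Fig. 10(b).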

Fig. 4. (color online) Dispersing and reimaging after the image mapper. One sub-lens of the lens array L3 and one prism of the prism array are depicted for clarity. Panel (a) illustrates the imaging direction, where the strip mirror is perpendicular to the plane. Panel (b) illustrates the spectral direction, where the strip mirror is parallel to the plane. Three strip mirrors are depicted for clarity; more mirrors are used in a practical system. L3 and the sensor are both tilted to account for the tilt induced by prism dispersion.[7] Panel (c) shows the spectral data collected on the sensor, where the dispersion position at the central wavelength and the spectral spread over the full wavelength range are marked.

The dispersion spread can be deduced from Eq. (15) and expressed as

Letting the distance between adjacent slits be c, the condition for their spectral images not to overlap on the sensor is given by

According to Fig. 1, subimages are formed on the sensor plane to match the virtual aperture array, as shown in Fig. 5. Their centers, denoted as ( , ), correspond to those of the virtual aperture array and are given by

Fig. 5. (color online) Sensor coordinates and subimage central positions.
3.5. IRF of the imaging spectrometer

Considering the layout of the system in Fig. 1, the light wave after the image mapper is imaged onto the sensor after passing through L2, the prism, and L3. According to the above equations, the IRF of the imaging spectrometer at wavelength λ is obtained as follows:

The detailed steps to obtain Eq. (19) are given in Appendix B.

The PRF of the entire system at wavelength λ is the squared modulus of the IRF, so we acquire the PRF in Eq. (1) as follows:

The resulting PRF is a function of both the object-plane and image-plane coordinates; it describes the light intensity distribution on the image plane produced by a point source on the object plane, so each point source determines a unique light intensity distribution on the image plane.

4. Simulation results

In this paper, the imaging simulation is performed by using MATLAB.[34] The structure parameters are shown in Table 1. The prism material is chosen to be SF6, and its refractive index is determined by the Sellmeier formula.[35]

Table 1.

Structure parameters.


The parameters deduced from Table 1 are listed in Table 2.

Table 2.

Deduced structure parameters.


According to Table 1 and Table 2, the magnification of the fore optics is 1, and thus the object field is chosen as 16 mm × 16 mm. Since the magnification of the imaging spectrometer is 0.28125, the width of the spectral image line on the sensor is 0.16 mm × 0.28125 = 45 μm. As a result, the pixel size of the sensor is chosen as 7.5 μm × 7.5 μm. In order to obtain precise simulation results, we set the sampling pixel size to 1/3 of the sensor pixel size, so that each element size in the simulated image is 2.5 μm.

4.1. PRF of the fore optics

The PRF of the fore optics is simulated by using Eq. (3), as demonstrated in Fig. 6. Figures 6(a)–6(c) show the 2D distributions of the PRF at 450 nm, 600 nm, and 750 nm, respectively. Figures 6(d)–6(f) display the curve shapes of the PRF for the results in panels (a)–(c), respectively.

Fig. 6. (color online) The PRF of the fore optics at (a) 450 nm, (b) 600 nm, and (c) 750 nm; panels (d), (e), and (f) show the PRF curves for panels (a), (b), and (c), respectively.

The diameter of the PRF becomes larger as the wavelength increases. Thus, the spatial resolution of the fore optics is determined by the diameter of the PRF at 750 nm, which is approximately 38 μm. This is much smaller than the 160 μm width of the strip mirror, which determines the spatial resolution of the imaging spectrometer. The results indicate that the spatial resolution of the IMS is determined by the width of the strip mirror under the system parameters provided in Table 1.

4.2. PRF of the IMS system

Based on the analysis in Subsection 3.4 and the parameters listed in Table 1, there are 25 subimages on the sensor, and 4 spectral images are included in each subimage at one wavelength. The PRFs at 450 nm, 600 nm, and 750 nm are shown in Fig. 7. Figure 8 shows the layout of the subimages at a single wavelength. Figure 9 shows the spectral images at 450 nm, 500 nm, 550 nm, 600 nm, 650 nm, 700 nm, and 750 nm when the object is a uniform radiation plane.

Fig. 7. (color online) The PRFs at (a) 450 nm, (b) 600 nm, and (c) 750 nm. Panels (d), (e), and (f) show the PRF curves of panels (a), (b), and (c), respectively and the full width at half maximum (FWHM) of each curve.
Fig. 8. (color online) Raw monochromatic image, showing (a) all 25 subimages and (b) a close-up of a single subimage.
Fig. 9. (color online) (a) Raw monochromatic images at wavelengths of 450 nm, 500 nm, 550 nm, 600 nm, 650 nm, 700 nm, and 750 nm, respectively. (b) A close-up of a single subimage. (c) The FWHM of the image line at 600 nm.

The total spectral spread from 450 nm to 750 nm is 1.08 mm, which is less than the spacing of adjacent slit images, and thus the spectral images of adjacent slits will not overlap. The mean width of each sliced image line at 600 nm shown in Fig. 9 is about 46.25 μm, so the number of spectrally resolvable bands can be approximated as 24.3, which is close to the designed value of 25 spectral bands. The calculated center spacing between adjacent subimages is 4.8 mm along both directions. Meanwhile, the width of a subimage is about 4.5 mm, which is smaller than the spacing. As a result, the slices of the object field of view can be imaged separately on the sensor.

4.3. Spectral imaging

As shown in Fig. 10(a), a hyperspectral data cube obtained by the push-broom hyperspectral imager (PHI)[36] is employed for the simulation. The single spectral image is defined with a size of 16 mm×16 mm. For a fine simulation, we assume that it is sampled by an 1800×1800 grid. In addition, the spectral range of the data cube is from 450 nm to 750 nm, and 25 spectral bands are chosen for the simulation. Figure 10(b) shows the imaging data acquired by the IMS on the sensor.

Fig. 10. (color online) Imaging simulation, showing (a) the hyperspectral data cube, in which the area in the red box is chosen as the object data cube for simulation, and (b) the final spectral imaging data on the sensor of the IMS.

In Fig. 10(b), the long-wavelength spectral bands appear brighter on the detector than the short-wavelength bands owing to the nonlinear dispersion of the prism. The reconstructed spectral images are obtained by the remapping algorithm,[17] which establishes a one-to-one correspondence between each voxel in the data cube and a pixel on the image plane through calibration. The results are shown in Fig. 11.

Fig. 11. Reconstructed images of the (a) 472-nm, (b) 547-nm, and (c) 671-nm monochromatic images.
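The remapping reconstruction can be sketched minimally: assuming calibration yields a lookup table from sensor pixels to datacube voxels, reconstruction is a direct copy with no inversion. The tiny sensor layout below is invented purely for illustration.

```python
import numpy as np

def remap(raw, lut, cube_shape):
    """Remapping reconstruction: calibration provides, for each sensor
    pixel, the (row, col, band) voxel it carries, so the cube is filled
    by direct copying with no matrix inversion."""
    cube = np.zeros(cube_shape)
    for (pr, pc), (r, c, b) in lut.items():
        cube[r, c, b] = raw[pr, pc]
    return cube

# Tiny invented layout: a 2x2x2 cube flattened onto a 2x4 sensor,
# with the two bands of each row interleaved along the columns.
lut = {(i, j): (i, j % 2, j // 2) for i in range(2) for j in range(4)}
raw = np.arange(8.0).reshape(2, 4)
cube = remap(raw, lut, (2, 2, 2))
```

This direct-copy structure is what makes the IMS reconstruction simple compared with computational snapshot spectrometers that must solve an inverse problem.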

In order to verify the spectral imaging performance, the spectra of different targets in the reconstructed data cube are compared with the original spectra, as shown in Figs. 12(b)–12(d). The spectra at the three points all appear lower across the short-wavelength bands and higher across the long-wavelength bands than the original spectra, which is caused by the nonlinear dispersion of the prism. After gray-scale calibration of the data cube, the calibrated spectra of the three points are shown in Figs. 12(e)–12(g).

Fig. 12. (color online) Spectral curves of the reconstructed images at points A, B, and C, respectively.

The reconstructed images exhibit most features of the original object, as shown in Fig. 11, and the calibrated reconstructed spectra are close to the original spectra, as shown in Fig. 12. One hundred object points are chosen to evaluate the spectral angle (SA) between the reconstructed spectral curve and the original one, and the average result is about 0.003 rad. Note that some details are blurred and there are some distortions in the reconstructed spectra, which are mainly caused by the spatial resolution degradation of the reconstructed spectral images, the approximately 1% cross talk between adjacent subimages, and the nonlinear dispersion of the prism.
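The spectral angle metric used above has a standard definition: the angle between two spectra viewed as vectors. A sketch follows; the test spectrum is made up.

```python
import numpy as np

def spectral_angle(s1, s2):
    """Spectral angle (rad) between two spectra viewed as vectors;
    it is insensitive to a global gain difference between them."""
    s1 = np.asarray(s1, dtype=float)
    s2 = np.asarray(s2, dtype=float)
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

ref = np.array([0.2, 0.5, 0.9, 0.7])        # made-up spectrum
sa_scaled = spectral_angle(ref, 2.0 * ref)  # ~0: scaling leaves SA unchanged
```

Because a uniform gain change leaves the angle at zero, the SA isolates spectral shape errors from overall brightness errors, which suits the gray-scale-calibrated comparison above.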

5. Effects of errors

The image mapper and prism array are difficult to manufacture. The nonuniformity of the mirrors and the prism deteriorates the performance of the spectral imaging. In this section, the effects of these errors are analyzed based on the presented model.

5.1. Tilt angle error of the strip mirror

The image mapper is usually machined by the diamond raster fly cutting method,[7] and a small tilt angle error is common for each facet.[7,31] The actual tilt angles of the mirror are

where Δαm and Δβn are the angle errors. According to Eq. (6), the reflection angle of the light from each strip mirror changes, and thus the virtual aperture centers deviate from the designed values. These deviations cause the pupils to be displaced within the virtual apertures. As a result, the light throughput passing through L3 may decrease and the light leaking into adjacent lenses may increase. The light intensity variation on the L3 plane is calculated to evaluate the influence of the deviation on the light throughput. The light intensity ratio (LIR) is given by
where Ia is the light intensity before L3 and Ib is the light intensity immediately after L3. A higher LIR means a higher light intensity on the image plane, which helps to improve the SNR of the system.[37,38]

The light that leaks from one lens on L3 to the neighboring lenses produces cross talk, which is caused by the diffraction of the image mapper and by some system errors, such as the roughness and scattering on the mirror facets. The cross talk degrades the reconstructed spectral images and causes spectral information to mix between adjacent subimages.

In the simulation, the tilt angle errors are assumed to be random values between 0 mrad and 0.97 mrad ( ). Specifically, random values in the ranges of 0–0.097 mrad ( ), 0.097–0.193 mrad ( ), …, and 0.873–0.97 mrad ( ) are implemented as the tilt-angle error of each strip mirror. Then, the LIR and cross talk are computed through simulations. The results are obtained by calculating the mean value over 5 repetitions at a fixed wavelength.
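The throughput loss can be illustrated with a simplified geometric Monte Carlo model: a tilt error ε displaces the pupil on the L3 plane by roughly 2f2ε, and the LIR is approximated by the overlap of two circles. This purely geometric stand-in ignores diffraction, and all radii and focal lengths below are assumed values, not the paper's parameters.

```python
import numpy as np

def circle_overlap(R, r, d):
    """Intersection area of two circles (radii R, r; center distance d)."""
    if d >= R + r:
        return 0.0
    if d <= abs(R - r):
        return np.pi * min(R, r) ** 2
    a = R * R * np.arccos((d * d + R * R - r * r) / (2.0 * d * R))
    b = r * r * np.arccos((d * d + r * r - R * R) / (2.0 * d * r))
    c = 0.5 * np.sqrt((-d + R + r) * (d + R - r) * (d - R + r) * (d + R + r))
    return a + b - c

def mean_lir(f2, r_pupil, r_lens, err_max, trials=2000, seed=0):
    """Geometric stand-in for the LIR: a random tilt error eps shifts
    the pupil by about 2*f2*eps on the L3 plane; the LIR is the mean
    fraction of the pupil area still inside the lens aperture."""
    rng = np.random.default_rng(seed)
    shifts = 2.0 * f2 * rng.uniform(0.0, err_max, trials)
    areas = [circle_overlap(r_lens, r_pupil, s) for s in shifts]
    return float(np.mean(areas) / (np.pi * r_pupil ** 2))

# Assumed radii/focal length; error spans follow the 0-0.97 mrad range.
lir_small = mean_lir(f2=0.1, r_pupil=0.4e-3, r_lens=0.5e-3, err_max=0.10e-3)
lir_large = mean_lir(f2=0.1, r_pupil=0.4e-3, r_lens=0.5e-3, err_max=0.97e-3)
```

Small errors leave the pupil fully inside the lens aperture, while larger errors push part of it outside, reproducing the qualitative trend of a falling LIR with growing tilt error.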

The results are illustrated in Fig. 13, which shows that the LIR is about 95% and the cross talk is about 0.796% when no error is considered. As the error added to the model increases, the LIR decreases: with the largest tilt error, the LIR drops by about 10% compared with the no-error case. This indicates that the tilt error reduces the system light throughput. As illustrated in Fig. 13(b), the cross talk also decreases when the tilt error increases, which indicates that the tilt error does not enhance the cross talk. Figure 13(c) shows the cross-section of the same image on the sensor when different tilt errors are considered. The image intensity decreases as the tilt error increases, which fits the curve in Fig. 13(a). In order to avoid reducing the light intensity by more than 5%, the tilt angle error should be controlled below the corresponding value.

Fig. 13. (color online) (a) The LIR variation, (b) the cross talk between the adjacent lens, (c) the intensity of the same spectral image when different tilt errors are considered.
5.2. Apex angle error of the prism

The error of the prism apex angle emerges in the manufacturing process of the prism array. The actual prism apex angle is expressed as

where ΔA is the apex angle error. According to Eqs. (15) and (16), the angular dispersion and the spectral spread are affected by this error. In particular, the variation of the spectral spread may cause spectral mixing between adjacent spectral images. The ideal apex angle of the prism is 29.9°, which is chosen to fully utilize the void region between adjacent spectral images to acquire the spectrum from 450 nm to 750 nm. According to Eq. (16), the spectral spread becomes larger as the apex angle error increases from zero, so an angle error above zero would cause spectral mixing between adjacent spectral images, as illustrated in Fig. 14.

Fig. 14. (color online) Variations of the mixing between the 450-nm image line and the adjacent 750-nm image line as the apex angle error increases.

As the angle error increases, the mixing between the 450-nm image line and the adjacent 750-nm image line becomes significant. When a sufficiently large angle error is added to the prism apex angle, the adjacent 750-nm image line has a significant influence on the intensity of the 450-nm line, which makes the 450-nm spectral information inaccurate. Therefore, the apex angle error should be kept below about 36″ to avoid spectral mixing. In conclusion, to ensure 25 resolvable spectral bands and avoid spectral mixing simultaneously, the apex angle error should be within the range of 0–36″ (0.01°).
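Because the spectral spread of a thin prism scales roughly linearly with the apex angle, the fractional stretch caused by an apex error can be estimated directly; the sketch below also records the 36″ = 0.01° conversion. This uses the thin-prism approximation, not the exact dispersion relation.

```python
import numpy as np

def spread_ratio(A_deg, err_arcsec):
    """Thin-prism scaling: the spectral spread is proportional to the
    apex angle, so an apex error dA stretches the spread by the factor
    (A + dA) / A."""
    A = np.deg2rad(A_deg)
    dA = np.deg2rad(err_arcsec / 3600.0)
    return (A + dA) / A

ratio = spread_ratio(29.9, 36.0)   # 36 arcseconds equals 0.01 degrees
```

Any positive apex error stretches the spread (ratio above one), encroaching on the void region between adjacent spectral images; the full simulation above quantifies when that encroachment becomes visible mixing.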

6. Conclusions

In this research, we establish a theoretical model of the IMS. Simulations based on this model are performed to generate the PRFs and spectral imaging data. Moreover, the mirror tilt angle error and the prism apex angle error are analyzed through simulations. The results present the corresponding relation between the mirror facet tilt error and the light intensity variation of the system, and show that the tilt angle error of the mirror facets causes the light intensity on the image plane to decrease. When manufacturing the image mapper, the mirror tilt angle error should be controlled to ensure a 95% LIR in theory. To avoid spectral mixing, the prism apex angle error should be within the range of 0–36″ (0.01°).

The presented model can be used to analyze the influences of other errors and various system factors on the LIR, cross talk, reconstructed image quality, and spectral information.[39] In future work, the roughness and scattering from the strip mirror surfaces and edges can be added to the image mapper reflection model as a phase modulating coefficient. The assembly errors of the image mapper and the prism array can be included in the model. Aberrations can be added to the model as modulating coefficients of the wave front.[40] In addition, a sensor model can also be considered to evaluate the influence of sensor noise.

References
[1] Vane G Goetz A F H Wellman J B 1984 IEEE Trans. Geosci. Remote Sens. 22 546
[2] Xiang Li B Yuan Y Lu Q B 2009 Acta Phys. Sin. 58 5399 in Chinese
[3] Li S P Wang L Y Yan B Li L Liu Y J 2012 Chin. Phys. 21 108703
[4] Zhang H M Wang L Y Yan B Li L Xi X Q Lu L Z 2013 Chin. Phys. 22 078701
[5] Qian L L Lv Q B Huang W Xiang Li B 2015 Chin. Phys. 24 080703
[6] Gao L Wang L V 2016 Phys. Rep. 616 1
[7] Gao L Kester R T Tkaczyk T S 2009 Opt. Express 17 12293
[8] Weitzel L Krabbe A Kroker H Thatte N Tacconi-Garman L E Cameron M Genzel R 1996 Astron. Astrophys. Suppl. Ser. 119 531
[9] Murphy T W Soifer B T 1999 Publ. Astron. Soc. Pac. 111 1176
[10] Cook T A Gsell V J Golub J Chakrabarti S 2003 Astrophys. J. 585 1177
[11] Henault F Bacon R Content R Lantz B Laurent F Lemonnier J Morris S 2004 Proc. SPIE 5249 134
[12] Tecza M Thatte N Clarke F Goodsall T Freeman D Salaun Y 2006 Proc. SPIE 6273 62732L
[13] Antichi J Dohlen K Gratton R G Mesa D Claudi R U Giro E Boccaletti A Mouillet D Puget P Beuzit J L 2009 Astrophys. J. 695 1042
[14] Zhang J J Cheng X M Song J Y Bai J M 2011 Astronomical Research & Technology 8 139 in Chinese
[15] Gao D Y Zhao F Qiu P Jiang X J 2012 Astronomical Research & Technology 9 143 in Chinese
[16] Zhang T Y Ji H X Hou Y H Hu Z W Wang L 2015 J. Appl. Opt. 36 531
[17] Kester R T Gao L Hagen N Tkaczyk T S 2010 Opt. Express 18 14330
[18] Kester R T Bedard N Gao L Tkaczyk T S 2011 J. Biomed. Opt. 16 056005
[19] Kester R T Bedard N Tkaczyk T S 2011 Proc. SPIE 8048 289
[20] Gao L Kester R T Tkaczyk T S 2010 Biomedical Optics and 3-D Imaging April 11–14, 2010 Miami, USA BMD8
[21] Gao L Elliott A D Kester R T Bedard N Hagen N Piston D W Tkaczyk T S 2010 Frontiers in Optics October 24–28, 2010 Rochester, USA FML2
[22] Bedard N Schwarz R A Hu A Bhattar V Howe J Williams M D Gillenwater A M Kortum R R Tkaczyk T S 2013 Biomed. Opt. Express 4 938
[23] Kester R T Gao L Bedard N Tkaczyk T S 2010 Proc. SPIE 7555 75550A
[24] Gao L Smith R T Tkaczyk T S 2012 Biomed. Opt. Express 3 48
[25] Hagen N Kester R T Walker C 2012 Proc. SPIE 8358 43
[26] Gao L Bedard N Hagen N Kester R T Tkaczyk T S 2011 Opt. Express 19 17439
[27] Nguyen T Pierce M C Higgins L Tkaczyk T S 2013 Opt. Express 21 13758
[28] Gao L Smith R T 2015 J. Biophotonics 8 441
[29] Bedard N Hagen N Gao L Tkaczyk T S 2012 Opt. Eng. 51 111711
[30] Gao L Tkaczyk T S 2012 Opt. Eng. 51 043203
[31] Kester R T Gao L Tkaczyk T S 2010 Appl. Opt. 49 1886
[32] Goodman J W 2005 Introduction to Fourier Optics 3 New York Roberts and Company Publishers 23
[33] Eismann M T 2012 Hyperspectral Remote Sensing Washington SPIE 314
[34] Voelz D 2011 Computational Fourier Optics: a MATLAB tutorial Washington SPIE 29
[35] Bass M 2010 3 Handbook of Optics 4 New York McGraw-Hill 2.21
[36] Shao H Wang J Y Xue Y Q 1998 J. Remote Sens. 2 251
[37] Bai X Zhang C M Jing C Y Guan X W Cao F Li Y N Xie L L 2011 Acta Phys. Sin. 60 070703 in Chinese
[38] Zhang C M Huang W J Zhao B C 2010 Acta Phys. Sin. 59 5486 in Chinese
[39] Ding X M Yuan Y Su L J 2016 Frontiers in Optics October 17–21, 2016 Rochester, USA JTh2A.73
[40] Shamir J 2006 Optical Systems and Processes Washington SPIE 106